An intelligence quotient, or IQ, is a score derived from one of several different standardized tests designed to assess intelligence. The term "IQ," from the German Intelligenz-Quotient, was devised by the German psychologist William Stern in 1912 as a proposed method of scoring children's intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early 20th century.[1] Lewis Terman adopted that form of scoring, expressing a score as a quotient of "mental age" and "chronological age," for his revision of the Binet-Simon test,[2] the first version of the Stanford-Binet Intelligence Scales.
Although the term "IQ" is still in common use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on a standard score derived from the subject's rank order on the test content, with the median score set to 100 and a standard deviation of 15; not all tests, however, assign 15 IQ points to each standard deviation.
IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status,[3] and, to a substantial degree, parental IQ. While the heritability of IQ has been investigated for nearly a century, controversy remains regarding the significance of heritability estimates,[4][5] and the mechanisms of inheritance are still a matter of some debate.[6]
IQ scores are used in many contexts: as predictors of educational achievement or special needs, by social scientists who study the distribution of IQ scores in populations and the relationships between IQ score and other variables, and as predictors of job performance and income.
The average IQ scores for many populations have been rising at an average rate of three points per decade since the early 20th century, a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real changes in intellectual abilities, or merely methodological problems with past or present testing.
Englishman Francis Galton, influenced by Darwinism, founded eugenics (and later psychometrics) to measure differences between upper and lower classes.[7] Galton argued that due to heredity, white aristocrats were intellectually superior to other humans; nurture, meanwhile, played a lesser role in intellectual capacity.[8]
French psychologist Alfred Binet gave more weight to nurture, arguing that intelligence could be improved.[9] Binet and fellow French psychologist Théodore Simon later developed the Binet-Simon test for measuring intellectual development or mental age to diagnose French children in need of special assistance classes.[10] Binet-Simon tasks were diverse in order to neutralize individual differences in types of intelligence and provide a general measure.
English psychologist Charles Spearman, however, did not believe there were different types of intelligence.[11] He devised a correlational formula to define a common intellective factor (which he called "general intelligence"), a factor that Binet argued did not exist.[12]
In 1910, after American psychologist Henry H. Goddard published a translation of the Binet-Simon test, the eugenics movement in the USA seized on the test as a means of lending credibility to diagnoses of mental retardation.[13] American psychologist Lewis Terman revised the Binet-Simon scale, implementing German psychologist William Stern's intelligence quotient (I.Q.), the ratio of mental age to chronological age, and used the resulting Stanford-Binet scale as a measure of general intelligence.[14]
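Stern's quotient, as implemented in Terman's Stanford-Binet (where it is multiplied by 100 so that average performance for one's age gives a score of 100), can be written as:

```latex
\mathrm{IQ}_{\text{ratio}} = \frac{\text{mental age}}{\text{chronological age}} \times 100
% e.g. a mental age of 12 at a chronological age of 10 gives (12/10) \times 100 = 120
```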
A team of psychologists led by eugenicist Robert Yerkes assisted the US Army in rapidly assessing and assigning huge numbers of personnel.[15] They developed two group-administered intelligence tests for this purpose: an alpha test for literate personnel and a beta test for the illiterate. They successfully tested 1,726,000 recruits.[16] However, there was too little time for validity testing[17] and not enough staff to administer the tests correctly.[18] Many officers distrusted the psychologists, accusing them of conducting research for their own purposes[18] using culturally biased tests.[19] Orders were issued that low test scores should not bar men from officer training and that a disability board should consider all discharges.[20] The outcome was that 0.5% of recruits were discharged as mentally inferior, whereas Yerkes would have preferred to discharge the 3% whose results showed a mental age of under 10.[16]
In contrast, another team of psychologists, led by Walter Dill Scott, defined intelligence as a diverse complex of capacities, as Binet had. They stressed an individual-differences approach, emphasizing mental qualities as functions that adjust to the environment.[12] They soon developed a rating scale to classify and place enlisted men according to how they used their intelligence, rather than their base levels of intelligence.[19] The army was enthusiastic about Scott's classification of personnel. By the time the war ended, his procedures had already been incorporated into the military[18] and his team had classified 3½ million men and assigned 973,858 to technical units.[16]
The army abolished Yerkes' team after the war but employed two psychologists to continue research on intelligence testing.[18] However, a great deal of positive post-war publicity about army psychological testing helped to make psychology a respected field,[21] and jobs and funding in psychology subsequently increased.[22] Group intelligence tests were developed for, and became widely used in, primary and secondary schools, universities, and industry.[18] But controversy followed when Goddard, one of Yerkes' team, admitted that they had been guilty of bad logic in finding that 45% of army recruits had a mental age of 12 or less.[23] In addition, there was no agreement on what the intelligence tests measured,[24][25][26] and the publication of racial differences in scores[27] drew criticism that environmental influences on scores had not been considered.[24][28]
The modern IQ scale is a mathematical transformation of a raw score on an IQ test, based on the rank of that score in a normalization sample. This procedure was pioneered by David Wechsler. Modern scores are sometimes referred to as "deviation IQs," while scores from the older, age-based method are referred to as "ratio IQs."[30] IQ scales of either kind are ordinal scales, and thus IQ points are not units of measurement.[31][32][33][34]
The two scoring methodologies yield similar results near the population median, but the older ratio IQs yielded far higher scores for high-scoring (gifted) children. Current IQ tests do not yield such extreme scores; recent validation studies suggest, however, that the resulting concerns about ceiling effects in identifying gifted children are exaggerated.[35] In any event, the error of estimation is greatest for scores above the population median[36] and is likely to overestimate the true score (in the sense of formal testing theory) of a test-taker who obtains a score above the median.[37]
Since the publication of the Wechsler Adult Intelligence Scale (WAIS), almost all intelligence scales have adopted the deviation method of scoring. The use of this scoring method makes the term "intelligence quotient" an inaccurate description, mathematically speaking, but the term "IQ" still enjoys colloquial usage,[38] and is used to label the standard scoring of most cognitive ability tests currently in use.
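For illustration only, here is a minimal sketch of deviation scoring in general terms; it is not the published scoring procedure of the WAIS or of any other specific test, and the function name, the toy sample, and the mid-rank percentile choice are assumptions made for this example:

```python
from statistics import NormalDist

def deviation_iq(raw_score, norm_sample, mean_iq=100, sd_iq=15):
    """Illustrative deviation-IQ scoring (hypothetical helper, not any real test's algorithm).

    The raw score's rank within the normalization sample is converted to a
    mid-rank percentile, mapped onto the standard normal distribution, and
    rescaled so that the sample median corresponds to 100 and one standard
    deviation corresponds to 15 IQ points.
    """
    n = len(norm_sample)
    below = sum(s < raw_score for s in norm_sample)
    ties = sum(s == raw_score for s in norm_sample)
    percentile = (below + 0.5 * ties) / n
    # keep the percentile strictly between 0 and 1 so inv_cdf is defined
    percentile = min(max(percentile, 0.5 / n), 1 - 0.5 / n)
    z = NormalDist().inv_cdf(percentile)
    return mean_iq + sd_iq * z

# Tiny made-up normalization sample for demonstration
norms = [38, 42, 45, 47, 50, 50, 52, 53, 55, 58, 60]
print(round(deviation_iq(55, norms)))  # prints 111 (about the 77th percentile of this toy sample)
```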
The table below illustrates how the same pupils can obtain different scores on three different IQ tests (the KABC-II, WISC-III, and WJ-III):

Pupil | KABC-II | WISC-III | WJ-III |
---|---|---|---|
Asher | 90 | 95 | 111 |
Brianna | 125 | 110 | 105 |
Colin | 100 | 93 | 101 |
Danica | 116 | 127 | 118 |
Elpha | 93 | 105 | 93 |
Fritz | 106 | 105 | 105 |
Georgi | 95 | 100 | 90 |
Hector | 112 | 113 | 103 |
Imelda | 104 | 96 | 97 |
Jose | 101 | 99 | 86 |
Keoku | 81 | 78 | 75 |
Leo | 116 | 124 | 102 |
Psychometricians generally regard IQ tests as highly reliable (in the technical sense of testing theory) and clinical psychologists generally regard them as having sufficient validity for many clinical purposes. A test-taker's score on any one IQ test is surrounded by an error band that shows, to a specified degree of confidence, what the test-taker's true score (in that same technical sense) is likely to be. Test-takers can have varying scores on differing occasions of taking IQ tests and can vary in scores on different IQ tests taken at the same age.[40][41][42]
Since the early twentieth century, IQ scores have increased at an average rate of around three IQ points per decade in most parts of the world.[43] This phenomenon has been named the Flynn effect (also called the "Lynn-Flynn effect"), after Richard Lynn and James R. Flynn. Attempted explanations have included improved nutrition, a trend towards smaller families, better education, greater environmental complexity, and heterosis. Some researchers believe that modern education has become more geared toward IQ tests, thus producing higher scores but not necessarily higher intelligence.[44] As a result, tests are routinely renormalized to restore a mean score of 100, as with the WISC-R (1974), WISC-III (1991), and WISC-IV (2003). This adjustment addresses the variation over time, allowing scores to be compared longitudinally.
Some researchers argue that the Flynn effect may have ended in some developed nations, starting in the 1980s in the United Kingdom[45], and in the mid-1990s in Denmark[46] and in Norway.[47]
Environmental factors play a role in determining IQ. Proper childhood nutrition appears critical for cognitive development; malnutrition can lower IQ. For example, iodine deficiency causes an average fall of 12 IQ points.[48] Average IQ in third world countries is expected to increase dramatically if deficiencies of iodine and other micronutrients are eradicated.
A recent study found that the FADS2 gene, along with breastfeeding, adds about seven IQ points to those with the "C" version of the gene. Those with the "G" version of the FADS2 gene see no advantage.[49][50]
Musical training in childhood may also increase IQ.[51] Recent studies have shown that training in using one's working memory may increase IQ.[52][53]
Some studies in the developed world have shown that inherited personality traits cause unrelated children raised in the same family ("adoptive siblings") to be as different as children raised in different families.[54][55] There are some family effects on the IQ of children, accounting for up to a quarter of the variance; by adulthood, however, this correlation approaches zero.[56] For IQ, adoption studies show that, after adolescence, adoptive siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.86), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adoptive siblings.[54] However, these results make no correction for the social and emotional effects frequently associated with adoption.
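As a hedged, textbook-style illustration only (not the method used in the studies cited here, and relying on strong assumptions such as no selective placement and no shared environment for twins reared apart), heritability is sometimes roughly gauged from correlations like these:

```latex
h^2 \;\approx\; r_{\mathrm{MZ\ apart}}
  % MZ twins reared apart: their correlation itself serves as a crude estimate of broad heritability
h^2 \;\approx\; 2\,\bigl(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}\bigr)
  % Falconer's classical formula, for twin pairs reared together
```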
Stoolmiller (1999)[57] found that the restriction of range in family environments that goes with adoption (adoptive families tend, for example, to be more similar in socio-economic status than the general population) suggests a possible underestimation of the role of the shared family environment in previous studies. Corrections for range restriction applied to adoption studies indicate that socio-economic status could account for as much as 50% of the variance in IQ.[57] However, the effect of restriction of range on IQ in adoption studies was examined by Matt McGue and colleagues, who wrote that "restriction in range in parent disinhibitory psychopathology and family socio-economic status had no effect on adoptive-sibling correlations [in] IQ".[58]
Eric Turkheimer and colleagues (2003)[59] studied the heritability of IQ in impoverished US families. Results demonstrated that in small children the proportions of IQ variance attributable to genes and environment vary nonlinearly with socio-economic status. The models suggest that in impoverished families, 60% of the variance in early childhood IQ is accounted for by the shared family environment, and the contribution of genes is close to zero; in affluent families, the result is almost exactly the reverse.[60] They suggest that the role of shared environmental factors may have been underestimated in older studies which often only studied affluent middle class families.[61]
A meta-analysis of 212 previous studies, published by Devlin and colleagues in Nature (1997),[6] evaluated an alternative model of environmental influence and found that it fit the data better than the 'family-environments' model commonly used. Shared maternal (fetal) environment effects, often assumed to be negligible, accounted for 20% of the covariance between twins and 5% between siblings, and the effects of genes were correspondingly reduced, with two measures of heritability falling below 50%.
Bouchard and McGue reviewed the literature in 2003, arguing that Devlin's conclusions about the magnitude of heritability are not substantially different from previous reports and that the conclusions regarding prenatal effects stand in contradiction to many previous reports.[62] They write:
Chipuer et al. and Loehlin conclude that the postnatal rather than the prenatal environment is most important. The Devlin et al. conclusion that the prenatal environment contributes to twin IQ similarity is especially remarkable given the existence of an extensive empirical literature on prenatal effects. Price (1950), in a comprehensive review published over 50 years ago, argued that almost all MZ twin prenatal effects produced differences rather than similarities. As of 1950 the literature on the topic was so large that the entire bibliography was not published. It was finally published in 1978 with an additional 260 references. At that time Price reiterated his earlier conclusion. Research subsequent to the 1978 review largely reinforces Price’s hypothesis.
Dickens and Flynn[63] postulate that the arguments regarding the disappearance of the shared family environment should apply equally well to groups separated in time, which is contradicted by the Flynn effect: changes have happened too quickly to be explained by heritable genetic adaptation. This paradox can be explained by observing that the measure "heritability" includes both the direct effect of the genotype on IQ and indirect effects in which the genotype changes the environment, in turn affecting IQ. That is, those with a higher IQ tend to seek out stimulating environments that further increase IQ. The direct effect may initially have been very small, but feedback loops can create large differences in IQ. In their model, an environmental stimulus can have a very large effect on IQ, even in adults, but this effect also decays over time unless the stimulus continues (the model could be adapted to include possible factors, such as nutrition in early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term gains if they taught children how to replicate, outside the program, the kinds of cognitively demanding experiences that produce IQ gains while they are in the program, and motivated them to persist in that replication long after leaving the program.[63][64]
Various studies have found the heritability of IQ to be between 0.4 and 0.8 in the United States;[65][66][67] that is, depending on the study, a little less than half to substantially more than half of the variation in IQ among the children studied was estimated to be due to genetic variation. Heritability measures the proportion of variation that can be attributed to genes within a particular measured population (however defined), not the extent to which genes contribute to an individual's intelligence. Moreover, the assumption that genetic and environmental influences on intelligence are independent is not universally accepted; as Jeremy Freese puts it, "The lucidity of Lewontin's arguments has historically proven no match for the allure of overly simple characterizations of outcomes as being x% due to genes and (1 – x)% not due to genes". That is, genes affect the environment and the environment affects genes.[68][69][70][71][72][73] A heritability in the range of 0.4 to 0.8 nevertheless implies that IQ is "substantially" heritable, that is, that a substantial part of the variation within a population is associated with genetic variation.
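In the sense used here, heritability is a population statistic: the share of the total (phenotypic) IQ variance statistically attributable to genetic variance in the population studied, not a statement about how much genes matter for any individual:

```latex
h^2 = \frac{\operatorname{Var}_{\text{genetic}}}{\operatorname{Var}_{\text{phenotypic}}},
\qquad 0.4 \le h^2 \le 0.8 \ \text{in the U.S. studies cited above}
```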
The effect of restriction of range on IQ was examined by Matt McGue and colleagues, who wrote that "restriction in range in parent disinhibitory psychopathology and family SES had no effect on adoptive-sibling correlations ... IQ."[58] On the other hand, a 2003 study by Eric Turkheimer, Andreana Haley, Mary Waldron, Brian D'Onofrio, and Irving I. Gottesman demonstrated that the proportions of IQ variance attributable to genes and environment vary with socioeconomic status. They found that in impoverished families 60% of the variance in IQ "in a sample of 7-year-old twins" was accounted for by the shared environment, and the contribution of genes was close to zero.[74]
It is reasonable to expect that genetic influences on traits like IQ should become less important as one gains experience with age. However, the opposite occurs.[75] Heritability estimates are as low as 20% in infancy, around 40% in middle childhood, and as high as 80% in adulthood.[75] The American Psychological Association's 1995 task force on "Intelligence: Knowns and Unknowns" concluded that within the white population the heritability of IQ is "around .75." The Minnesota Study of Twins Reared Apart, a multiyear study of 100 sets of reared-apart twins started in 1979, concluded that about 70% of the variance in IQ was associated with genetic variation. Some of the correlation of IQs of twins may be a result of the effect of the maternal environment before birth, shedding some light on why the IQ correlation between twins reared apart is so robust.[6] A number of points should be kept in mind when interpreting heritability estimates.
In 2004, Richard Haier, professor of psychology in the Department of Pediatrics at the University of California, Irvine, and colleagues at UC Irvine and the University of New Mexico used MRI to obtain structural images of the brains of 47 normal adults who also took standard IQ tests. The study demonstrated that general human intelligence appears to be based on the volume and location of gray matter tissue in the brain, and that only about 6 percent of the brain's gray matter appeared to be related to IQ.[77]
Many different sources of information have converged on the view that the frontal lobes are critical for fluid intelligence. Patients with damage to the frontal lobe are impaired on fluid intelligence tests (Duncan et al. 1995). The volumes of frontal grey matter (Thompson et al. 2001) and white matter (Schoenemann et al. 2005) have also been associated with general intelligence. In addition, recent neuroimaging studies have narrowed this association to the lateral prefrontal cortex. Duncan and colleagues (2000) showed, using positron emission tomography, that problem-solving tasks that correlate more highly with IQ also activate the lateral prefrontal cortex. More recently, Gray and colleagues (2003) used functional magnetic resonance imaging (fMRI) to show that individuals who were more adept at resisting distraction on a demanding working memory task had both higher IQ and increased prefrontal activity. For an extensive review of this topic, see Gray and Thompson (2004).[78]
A study involving 307 children aged between six and nineteen, measuring the size of brain structures with magnetic resonance imaging (MRI) and measuring verbal and non-verbal abilities, has been conducted (Shaw et al. 2006). The study indicated a relationship between IQ and the structure of the cortex, the characteristic change being that the group with superior IQ scores starts with a thinner cortex at an early age and then becomes thicker than average by the late teens.[79]
There is "a highly significant association" between the CHRM2 gene and intelligence according to a 2006 Dutch family study. The study concluded that there was an association between the CHRM2 gene on chromosome 7 and Performance IQ, as measured by the Wechsler Adult Intelligence Scale-Revised. The Dutch family study used a sample of 667 individuals from 304 families.[80] A similar association was found independently in the Minnesota Twin and Family Study (Comings et al. 2003) and by the Department of Psychiatry at the Washington University.[81]
Significant injuries isolated to one side of the brain, especially those occurring at a young age, may not significantly affect IQ.[82]
Studies reach conflicting conclusions regarding the controversial idea that brain size correlates positively with IQ. Jensen and Reed claim no direct correlation exists in nonpathological subjects.[83] A more recent meta-analysis suggests otherwise.[84]
An alternative approach has sought to link differences in neural plasticity with intelligence,[85] and this view has recently received some empirical support.[86]
Though intelligence has generally been believed to be immutable, recent research suggests that certain mental activities can change the brain's ability to process information, leading to the conclusion that intelligence can be altered over time. The brain is now understood to be neuroplastic and hence far more amenable to change than was once thought. Studies in animal neuroscience indicate that challenging activities can produce changes in gene expression patterns in the brain (for example, research on training degus to use rakes[87] and Iriki's earlier work with macaque monkeys indicating brain changes).
A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility that gains in fluid intelligence can transfer from specifically designed working memory training.[88] Further research will be needed to determine the nature, extent, and duration of the proposed transfer.[89] Among other questions, it remains to be seen whether the results extend to fluid intelligence tests other than the matrix test used in the study, and, if so, whether fluid intelligence measures retain their correlation with educational and occupational achievement after training, or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the effects of the training are durable over extended periods of time.
Peak capacity for both fluid intelligence and crystallized intelligence is reached at age 26, followed by a slow decline.[90]
Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between populations. While there is little scholarly debate about the existence of some of these differences, the reasons remain highly controversial both within academia and in the public sphere.
While IQ is sometimes treated as an end in itself, scholarly work on IQ focuses to a large extent on IQ's validity, that is, the degree to which IQ correlates with outcomes such as job performance, social pathologies, or academic achievement. Different IQ tests differ in their validity for various outcomes. Traditionally, the correlation between IQ and outcomes is viewed as a means of predicting performance; however, readers should distinguish between prediction in the hard sciences and prediction in the social sciences.
People with a higher IQ generally have lower adult morbidity and mortality. Post-traumatic stress disorder[91] and schizophrenia[92][93] are less prevalent in higher IQ bands. People in the midst of a major depressive episode have been shown to have a lower IQ than when without symptoms, and lower cognitive ability than people without depression of equivalent verbal intelligence.[94][95]
A study of 11,282 individuals in Scotland who took intelligence tests at ages 7, 9, and 11 in the 1950s and 1960s found an "inverse linear association" between childhood IQ scores and hospital admissions for injuries in adulthood. The association between childhood IQ and the risk of later injury remained even after accounting for factors such as the child's socioeconomic background.[96] Research in Scotland has also shown that people with a 15-point lower IQ had a fifth less chance of living to 76, while those with a 30-point disadvantage were 37% less likely than those with a higher IQ to live that long.[97]
A decrease in IQ has also been shown to be an early predictor of late-onset Alzheimer's disease and other forms of dementia. In a 2004 study, Cervilla and colleagues showed that tests of cognitive ability provide useful predictive information up to a decade before the onset of dementia.[98] However, when assessing individuals with a higher level of cognitive ability (in this study, those with IQs of 120 or more),[99] patients should not be evaluated against the standard norm but against an adjusted high-IQ norm that measures changes relative to the individual's higher ability level. In 2000, Whalley and colleagues published a paper in the journal Neurology examining links between childhood mental ability and late-onset dementia. The study showed that mental ability scores were significantly lower in children who eventually developed late-onset dementia than in other children tested.[100]
Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood-brain barrier is less effective. Such impairment may sometimes be permanent, or may sometimes be partially or wholly compensated for by later growth. Several harmful factors may also combine, possibly causing greater impairment.
Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Comprehensive policy recommendations targeting reduction of cognitive impairment in children have been proposed.[101]
In terms of the effect of one's intelligence on health, in one British study, high childhood IQ was shown to correlate with one's chance of becoming a vegetarian in adulthood.[102] In another British study, high childhood IQ was shown to inversely correlate with the chances of smoking.[103]
Men and women have statistically significant differences in average scores on tests of particular abilities.[104][105] Studies also illustrate consistently greater variance in the performance of men compared to that of women.[106]
IQ tests are constructed so that these sex differences are balanced out and there is, on average, no bias in favor of one sex; the consistent difference in variance, however, is not removed. Because the tests are defined so that there is no average difference, it is difficult to attach meaning to a statement that one sex has higher intelligence than the other. Nevertheless, some have made such claims even using unbiased IQ tests. For instance, there are claims that men outperform women on average by three to four IQ points, based on tests of medical students, where the greater variance of men's IQ can be expected to contribute to the result,[107] or where a 'correction' is made for different maturation ages.[108]
The 1996 Task Force investigation on intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across races.[66] The problem of determining the causes underlying this variation relates to the question of the contributions of "nature and nurture" to IQ. Most scientists believe there are insufficient data to resolve the contributions of heredity and environment. One of the most notable researchers arguing for a strong hereditary basis is Arthur Jensen. In contrast, Richard Nisbett, the long-time director of the Culture and Cognition program at the University of Michigan, argues that intelligence is a matter of environment and of biased standards that praise a certain type of "intelligence" (success on standardized tests) over others.
One study found a correlation of .82 between g (the general intelligence factor) and SAT scores;[109] another found a correlation of .81 between g and GCSE scores.[110]
Correlations between IQ scores (general cognitive ability) and achievement test scores are reported to be .81 by Deary and colleagues, with the percentage of variance accounted for by general cognitive ability ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".[110]
The American Psychological Association's report Intelligence: Knowns and Unknowns[66] states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. However, this means that they explain only 25% of the variance. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81).
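The step from a correlation of about .50 to "25% of the variance" is simply the squared correlation (the coefficient of determination):

```latex
r^2 = (0.50)^2 = 0.25
```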
According to Frank Schmidt and John Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[111] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but it varies with the type of job and across different studies, ranging from 0.2 to 0.6.[112] While IQ correlates more strongly with reasoning than with motor function,[113] IQ-test scores predict performance ratings in all occupations.[111] That said, for highly qualified activities (research, management), low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally skilled activities, athletic strength (manual strength, speed, stamina, and coordination) is more likely to influence performance.[111] The predictive power of IQ for job performance is largely mediated by the quicker acquisition of job-relevant knowledge.
In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[114] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability but not specific ability scores predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[115]
The American Psychological Association's report Intelligence: Knowns and Unknowns[66] states that other individual characteristics, such as interpersonal skills and aspects of personality, are probably of equal or greater importance, but that at this point we do not have equally reliable instruments to measure them.[66] More recently, others have argued that because most professional tasks are now standardized or automated, and because ranked IQ is a stable measurement over time that correlates with many positive personal traits in the general population, it is the best available tool for informing hiring and job placement at any stage in a career, independently of experience, personality bias, or any formal training one may acquire.
Some researchers claim that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much."[116][117]
Other studies show that ability and performance for jobs are linearly related, such that at all IQ levels, an increase in IQ translates into a concomitant increase in performance.[118] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independently of family background.[119]
Taking the above two principles together, very high IQ produces very high job performance, but no greater income than slightly high IQ. Studies also show that high IQ is related to higher net worth.[120]
The American Psychological Association's report Intelligence: Knowns and Unknowns[66] states that IQ scores account for about one-fourth of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[66]
Some studies claim that IQ accounts for only a sixth of the variation in income because many studies are based on young adults, many of whom have not yet completed their education. On page 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth, or 16%, of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance).
A 2002 study[121] further examined the impact of non-IQ factors on income and concluded that an individual's location, inherited wealth, race, and schooling are more important as factors in determining income than IQ.
In addition, IQ and its correlation to health, violent crime, gross state product, and government effectiveness are the subject of a 2006 paper in the publication Intelligence. The paper breaks down IQ averages by U.S. states using the federal government's National Assessment of Educational Progress math and reading test scores as a source.[122]
There is a correlation of -0.19 between IQ scores and number of juvenile offences in a large Danish sample; with social class controlled, the correlation dropped to -0.17. Similarly, the correlations for most "negative outcome" variables are typically smaller than 0.20, which means that test scores are associated with less than 4% of their total variance. It is important to realize that the causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated. Consequently, they may be more likely to engage in delinquent behavior, compared to other children who do well.[66]
IQ is also negatively correlated with certain diseases.
Tambs et al.[123] found that occupational status, educational attainment, and IQ are individually heritable; and further found that "genetic variance influencing educational attainment ... contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ." In a sample of U.S. siblings, Rowe et al.[124] report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.
In 2008, intelligence researcher Helmuth Nyborg examined whether IQ relates to religious denomination and income, using representative data from the National Longitudinal Study of Youth. His results showed that, on average, atheists scored 1.95 IQ points higher than agnostics, 3.82 points higher than those of liberal religious persuasions, and 5.89 IQ points higher than those of dogmatic persuasions.[125]
In the United States, certain public policies and laws regarding military service,[126][127] education, public benefits,[128] capital punishment,[129] and employment incorporate an individual's IQ into their decisions. However, in the case of Griggs v. Duke Power Co. in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except in very rare cases.[130] Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence.
Alfred Binet, a French psychologist, did not believe that IQ test scales qualified to measure intelligence. He neither invented the term "intelligence quotient" nor supported its numerical expression. He stated:
The scale, properly speaking, does not permit the measure of intelligence, because intellectual qualities are not superposable, and therefore cannot be measured as linear surfaces are measured.—Binet, 1905
Binet had designed the Binet-Simon intelligence scale in order to identify students who needed special help in coping with the school curriculum. He argued that with proper remedial education programs, most students regardless of background could catch up and perform quite well in school. He did not believe that intelligence was a measurable fixed entity.
Binet cautioned:
Some recent thinkers seem to have given their moral support to these deplorable verdicts by affirming that an individual's intelligence is a fixed quantity, a quantity that cannot be increased. We must protest and react against this brutal pessimism; we must try to demonstrate that it is founded on nothing.[131]
Some scientists dispute psychometrics entirely. In The Mismeasure of Man, Harvard professor and paleontologist Stephen Jay Gould argued that intelligence tests were based on faulty assumptions and showed their history of being used as the basis for scientific racism. He criticized
…the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.(pp. 24–25)
He spends much of the book criticizing the concept of IQ, including a historical discussion of how the IQ tests were created and a technical discussion of why, in his view, g is simply a mathematical artifact. Later editions of the book included criticism of The Bell Curve.
Bernard Davis wrote that while the nonscientific reviews of The Mismeasure of Man were almost uniformly laudatory, the reviews in the scientific journals were almost all highly critical.[132]
According to Dr. C. George Boeree of Shippensburg University, intelligence is a person's capacity to (1) acquire knowledge (i.e. learn and understand), (2) apply knowledge (solve problems), and (3) engage in abstract reasoning. It is the power of one's intellect, and as such is clearly a very important aspect of one's overall well-being. Psychologists have attempted to measure it for well over a century.
Several other ways of measuring intelligence have been proposed. Daniel Schacter, Daniel Gilbert, and others have moved beyond general intelligence and IQ as the sole means to describe intelligence.[133]
The American Psychological Association's report Intelligence: Knowns and Unknowns[66] states that IQ tests as predictors of social achievement are not biased against people of African descent since they predict future performance, such as school achievement, similarly to the way they predict future performance for people of European descent.[66]
However, IQ tests may well be biased when used in other situations. A 2005 study stated that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students,"[134] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa.[135][136] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for children with autism; the alternative of using developmental or adaptive skills measures is a relatively poor way of estimating intelligence in autistic children, and may have resulted in incorrect claims that a majority of children with autism are mentally retarded.[137]
A 2006 review article says that contemporary mainstream test analysis does not reflect substantial recent developments in the field and "bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s."[138]
In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force in 1995 to write a consensus statement on the state of intelligence research which could be used by all sides as a basis for discussion. The full text of the report is available through several websites.[66][139]
In this paper the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: "research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications".
The task force concluded that IQ scores do have high predictive validity for individual differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They found that individual differences in intelligence are substantially influenced by genetics and that both genes and environment, in complex interplay, are essential to the development of intellectual competence.
They state there is little evidence to show that childhood diet influences intelligence except in cases of severe malnutrition. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction. The task force suggests that explanations based on social status and cultural differences are possible, and that environmental factors have raised mean test scores in many populations. Regarding genetic causes, they noted that there is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis.
The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly-genetic explanations.
There are social organizations, some international, that limit membership to people who score at or above the 98th percentile on some IQ test or equivalent. Mensa International is perhaps the best known of these; other groups also require a score above the 98th percentile.
Many websites and magazines use the term "IQ" to refer to technical or popular knowledge in a variety of subjects not related to intelligence, including sex,[140] poker,[141] and American football,[142] among a wide variety of other topics. These tests are generally not standardized and do not fit the normal definition of intelligence. Well-constructed intelligence tests such as the Wechsler Adult Intelligence Scale, the Wechsler Intelligence Scale for Children, the Stanford-Binet, the Woodcock-Johnson III Tests of Cognitive Abilities, and the Kaufman Assessment Battery for Children-II do not merely place a test-taker's score within the norm, as presumably do the thousands of alleged "IQ tests" found on the internet; they also measure factors (e.g., fluid and crystallized intelligence, working memory, and the like) previously found, through factor analysis, to represent pure measures of intelligence. This claim cannot be made for the hundreds of online tests marketing themselves as IQ tests, a distinction that may unfortunately be lost on the public taking them.
IQ reference charts are tables suggested by test publishers that divide IQ score ranges into various categories.